12 research outputs found

    Noise estimation for hyperspectral subspace identification on FPGAs

    Full text link
    [EN] We present a reliable and efficient FPGA implementation of a procedure for the computation of the noise estimation matrix, a key stage for subspace identification of hyperspectral images. Our hardware realization is based on numerically stable orthogonal transformations, avoids the numerical difficulties of the normal equations method for the solution of linear least squares (LLS) problems, and exploits the special relations between the coupled LLS problems arising in the hyperspectral image. Our modular implementation decomposes the QR factorization that comprises a significant part of the cost into a sequence of suboperations, which can be computed efficiently on an FPGA. This work was supported by MINECO projects TIN2014-53495-R and TIN2013-40968-P.

    León, G.; González, C.; Mayo Gual, R.; Mozos, D.; Quintana-Ortí, E. S. (2019). Noise estimation for hyperspectral subspace identification on FPGAs. The Journal of Supercomputing, 75(3), 1323-1335. https://doi.org/10.1007/s11227-018-2425-3
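    The contrast the abstract draws between orthogonal transformations and the normal equations follows a standard numerical-linear-algebra argument; the sketch below is a generic summary of that reasoning, not a formula taken from the paper:

```latex
% Linear least squares: A is m x n with m >= n and full column rank, b in R^m.
% The normal equations  A^T A x = A^T b  square the conditioning, since
% kappa_2(A^T A) = kappa_2(A)^2; a QR-based solve works with A itself.
\min_{x}\;\|Ax-b\|_{2},
\qquad
A = QR \;\;(\text{thin QR})
\;\Longrightarrow\;
R\,x^{\star} = Q^{\mathsf{T}}b,
\qquad
\kappa_{2}(A^{\mathsf{T}}A) = \kappa_{2}(A)^{2}.
```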

    On the adequacy of lightweight thread approaches for high-level parallel programming models

    Get PDF
    High-level parallel programming models (PMs) are becoming crucial in order to extract the computational power of current on-node multi-threaded parallelism. The most popular PMs, such as OpenMP or OmpSs, are directive-based: the complexity of the hardware is hidden by the underlying runtime system, improving coding productivity. The implementations of OpenMP usually rely on POSIX threads (pthreads), offering excellent performance for coarse-grained parallelism and a perfect match with current hardware. OmpSs is a task-oriented PM based on an ad hoc runtime solution called Nanos++; it is the precursor of the tasking parallelism in the OpenMP specification. A recent trend in runtimes and applications points to leveraging massive on-node parallelism in conjunction with fine-grained and dynamic scheduling paradigms. In this paper we analyze the behavior of the OpenMP and OmpSs PMs on top of the recently emerged Generic Lightweight Threads (GLT) API. GLT exposes a common API for lightweight thread (LWT) libraries that offers the possibility of running the same application over different native LWT solutions. We describe the design details of those high-level PMs implemented on top of GLT and analyze different scenarios in order to assess where the use of LWTs may benefit application performance. Our work reveals the scenarios where LWTs outperform pthread-based solutions and compares the performance of an ad hoc solution with that of a generic implementation. The researchers from the Universitat Jaume I de Castelló were supported by project TIN2014-53495-R of the MINECO (Spain) and FEDER, and by the Generalitat Valenciana fellowship programme Vali+d 2015. Antonio J. Peña is co-financed by the Spanish Ministry of Economy and Competitiveness under Juan de la Cierva fellowship number IJCI-2015-23266. This work was partially supported by the U.S. Dept. of Energy, Office of Science, Office of Advanced Scientific Computing Research (SC-21), under contract DE-AC02-06CH11357. We gratefully acknowledge Enrique S. Quintana-Ortí (Universitat Jaume I) and Sangmin Seo (Samsung Corp.) for their advice in this work and the computing resources provided and operated by the Joint Laboratory for System Evaluation (JLSE) at Argonne National Laboratory. Peer Reviewed. Postprint (author's final draft).
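    As a rough illustration of the overhead that motivates lightweight threads, the minimal C sketch below shows the one-OS-thread-per-task baseline that LWT runtimes avoid; it is an example written for this summary, not GLT, OpenMP, or OmpSs code from the paper:

```c
/* Baseline pattern only: one pthread per fine-grained task.
 * LWT libraries (Argobots, Qthreads, ... behind the GLT API) instead map
 * many user-level threads onto a few OS threads, avoiding this cost.
 * Build with: cc -pthread lwt_motivation.c */
#include <pthread.h>
#include <stdio.h>

#define NTASKS 1024

static void *tiny_task(void *arg) {
    long i = (long)arg;
    return (void *)(i * i);   /* trivial work: far cheaper than thread creation */
}

int main(void) {
    pthread_t th[NTASKS];
    for (long i = 0; i < NTASKS; i++)      /* pay OS-thread creation per task */
        pthread_create(&th[i], NULL, tiny_task, (void *)i);
    for (long i = 0; i < NTASKS; i++)
        pthread_join(th[i], NULL);
    printf("%d OS threads created for %d tiny tasks\n", NTASKS, NTASKS);
    return 0;
}
```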

    Analysis of threading libraries for high performance computing

    Get PDF
    © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

    [EN] With the appearance of multi-/many-core machines, applications and runtime systems have evolved to exploit the new on-node concurrency brought by new software paradigms. POSIX threads (Pthreads) were widely adopted for that purpose and remain the most-used threading solution on current hardware. Lightweight thread (LWT) libraries emerged as an alternative, offering lighter mechanisms to tackle the massive concurrency of current hardware. In this article, we analyze in detail the most representative threading libraries, including Pthread- and LWT-based solutions. In addition, to examine the suitability of LWTs for different use cases, we develop a set of microbenchmarks consisting of OpenMP patterns commonly found in current parallel codes, and we compare the results using threading libraries and OpenMP implementations. Moreover, we study the semantics offered by threading libraries in order to expose the similarities among different LWT application programming interfaces and their advantages over Pthreads. This article shows that LWT libraries outperform solutions based on operating system threads when tasks and nested parallelism are required. The researchers from the Universitat Jaume I and Universitat Politècnica de València were supported by project TIN2014-53495-R of the MINECO and FEDER, and by the Generalitat Valenciana fellowship programme Vali+d 2015. Antonio J. Peña is financed by the European Union's Horizon 2020 research and innovation programme under Marie Skłodowska-Curie Grant No. 749516. This work was partially supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research (SC-21), under contract DE-AC02-06CH11357.

    Castelló, A.; Mayo Gual, R.; Seo, S.; Balaji, P.; Quintana Ortí, E. S.; Peña, A. J. (2020). Analysis of threading libraries for high performance computing. IEEE Transactions on Computers, 69(9), 1279-1292. https://doi.org/10.1109/TC.2020.2970706
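    For context, the two OpenMP patterns below (fine-grained tasks and nested parallel regions) are representative of the cases where the article reports LWT advantages; this is an illustrative C sketch written for this summary, not the article's microbenchmark code:

```c
/* Illustrative OpenMP patterns; compile with: cc -fopenmp omp_patterns.c */
#include <omp.h>
#include <stdio.h>

#define N 100000

int main(void) {
    static double v[N];

    /* Pattern 1: many fine-grained tasks created by a single producer. */
    #pragma omp parallel
    #pragma omp single
    for (int i = 0; i < N; i++) {
        #pragma omp task firstprivate(i)
        v[i] = 0.5 * i;
    }   /* all tasks complete before the parallel region ends */

    /* Pattern 2: nested parallelism (a common source of oversubscription
     * with OS threads, but cheap with user-level threads). */
    omp_set_max_active_levels(2);
    #pragma omp parallel num_threads(4)
    {
        #pragma omp parallel num_threads(4)
        {
            /* 16 logical workers in total */
        }
    }

    printf("v[10] = %f\n", v[10]);
    return 0;
}
```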

    DMR API: Improving cluster productivity by turning applications into malleable

    Get PDF
    [EN] Adaptive workloads can change the configuration of their jobs on the fly, in terms of number of processes. To carry out these job reconfigurations, we have designed a methodology which enables a job to communicate with the resource manager and, through the runtime, to change its number of MPI ranks. The collaboration between the workload manager (aware of the queue of jobs and the resource allocation) and the parallel runtime (able to transparently handle the processes and the program data) is crucial for our throughput-aware malleability methodology. Hence, when a job triggers a reconfiguration, the resource manager will check the cluster status and return the appropriate action: i) expand, if there are spare resources; ii) shrink, if queued jobs can be initiated; or iii) none, if no change can improve the global productivity. In this paper, we describe the internals of our framework and demonstrate how it reduces the global workload completion time while providing a more efficient usage of the underlying resources. For this purpose, we present a thorough study of adaptive workload processing, showing the detailed behavior of our framework in representative experiments. (C) 2018 Elsevier B.V. All rights reserved.

    Iserte Agut, S.; Mayo Gual, R.; Quintana Ortí, E. S.; Beltrán, V.; Peña Monferrer, A. J. (2018). DMR API: Improving cluster productivity by turning applications into malleable. Parallel Computing, 78, 54-66. https://doi.org/10.1016/j.parco.2018.07.006
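    The reconfiguration loop can be pictured with the hypothetical sketch below. The dmr_* names are placeholders invented here to convey the idea of a malleability point; they are not the actual DMR API, which is the one defined in the paper:

```c
/* Conceptual sketch with HYPOTHETICAL names (dmr_check, dmr_apply are
 * placeholders, not the real DMR API): the application reaches a
 * malleability point, asks the resource manager (via the runtime) whether
 * to expand, shrink, or do nothing, and reconfigures its MPI ranks.
 * Build with: mpicc dmr_sketch.c */
#include <mpi.h>

enum dmr_action { DMR_NONE, DMR_EXPAND, DMR_SHRINK };

/* Placeholder stubs: the real decision comes from the resource manager. */
static enum dmr_action dmr_check(void) { return DMR_NONE; }
static void dmr_apply(enum dmr_action a) { (void)a; /* redistribute ranks/data */ }

static void solver_step(int s) { (void)s; /* application work for one step */ }

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    for (int step = 0; step < 1000; step++) {
        solver_step(step);
        if (step % 100 == 0) {                  /* malleability point */
            enum dmr_action a = dmr_check();    /* expand / shrink / none */
            if (a != DMR_NONE)
                dmr_apply(a);
        }
    }
    MPI_Finalize();
    return 0;
}
```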

    Exploring the interoperability of remote GPGPU virtualization using rCUDA and directive-based programming models

    Get PDF
    [EN] Directive-based programming models, such as OpenMP, OpenACC, and OmpSs, enable users to accelerate applications by using coprocessors with little effort. These devices offer significant computing power, but their use can introduce two problems: an increase in the total cost of ownership and their underutilization, because not all codes match their architecture. Remote accelerator virtualization frameworks address those problems. In particular, rCUDA provides transparent access to any graphics processing unit (GPU) installed in a cluster, reducing the number of accelerators and increasing their utilization ratio. Joining these two technologies, directive-based programming models and rCUDA, is thus highly appealing. In this work, we study the integration of OmpSs and OpenACC with rCUDA, describing and analyzing several applications over three different hardware configurations that include two InfiniBand interconnects and three NVIDIA accelerators. Our evaluation reveals favorable performance results, showing low overhead and similar scaling factors when using remote accelerators instead of local devices. The researchers from the Universitat Jaume I de Castelló were supported by the Universitat Jaume I research project P11B2013-21, project TIN2014-53495-R, a Generalitat Valenciana grant, and FEDER. The researcher from the Barcelona Supercomputing Center (BSC-CNS) was supported by the European Commission (HiPEAC-3 Network of Excellence, FP7-ICT 287759), the Intel-BSC Exascale Lab collaboration, the IBM/BSC Exascale Initiative collaboration agreement, Computación de Altas Prestaciones VI (TIN2012-34557), and the Generalitat de Catalunya (2014-SGR-1051). This work was partially supported by the U.S. Dept. of Energy, Office of Science, Office of Advanced Scientific Computing Research (SC-21), under contract DE-AC02-06CH11357. The initial version of rCUDA was jointly developed by the Universitat Politècnica de València (UPV) and the Universitat Jaume I de Castelló (UJI) until 2010. This initial development was later split into two branches, and part of the UPV version was used in this paper. The development of the UPV branch was supported by the Generalitat Valenciana under grants PROMETEO 2008/060 and Prometeo II 2013/009. We gratefully acknowledge the computing resources provided and operated by the Joint Laboratory for System Evaluation (JLSE) at Argonne National Laboratory.

    Castelló-Gimeno, A.; Peña Monferrer, A. J.; Mayo Gual, R.; Planas, J.; Quintana Ortí, E. S.; Balaji, P. (2018). Exploring the interoperability of remote GPGPU virtualization using rCUDA and directive-based programming models. The Journal of Supercomputing, 74(11), 5628-5642. https://doi.org/10.1007/s11227-016-1791-y
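    As a concrete idea of what "little effort" means for directive-based models, the C/OpenACC sketch below offloads a saxpy loop with a single directive; it is a generic example written for this summary (and, as the paper argues, such a code would not need source changes to run on a remote GPU through rCUDA):

```c
/* Generic OpenACC saxpy; build with an OpenACC compiler, e.g. nvc -acc.
 * Whether the GPU executing the region is local or a remote one exposed
 * by rCUDA is transparent to this source code. */
#include <stdio.h>

#define N (1 << 20)

int main(void) {
    static float x[N], y[N];
    const float a = 2.0f;

    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* offload the loop to the accelerator via a directive */
    #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f (expected 4.0)\n", y[0]);
    return 0;
}
```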

    Improving the User Experience of the rCUDA Remote GPU Virtualization Framework

    Get PDF
    Graphics processing units (GPUs) are being increasingly embraced by the high-performance computing community as an effective way to reduce execution time by accelerating parts of their applications. Remote CUDA (rCUDA) was recently introduced as a software solution to address the high acquisition costs and energy consumption of GPUs that constrain further adoption of this technology. Specifically, rCUDA is a middleware that allows a reduced number of GPUs to be transparently shared among the nodes in a cluster. Although the initial prototype versions of rCUDA demonstrated its functionality, they also revealed concerns with respect to usability, performance, and support for new CUDA features. In response, in this paper, we present a new rCUDA version that (1) improves usability by including a new component that allows an automatic transformation of any CUDA source code so that it conforms to the needs of the rCUDA framework, (2) consistently features low overhead when using remote GPUs thanks to an improved new communication architecture, and (3) supports multithreaded applications and CUDA libraries. As a result, for any CUDA-compatible program, rCUDA now allows the use of remote GPUs within a cluster with low overhead, so that a single application running in one node can use all GPUs available across the cluster, thereby extending the single-node capability of CUDA. Copyright © 2014 John Wiley & Sons, Ltd. This work was funded by the Generalitat Valenciana under Grant PROMETEOII/2013/009 of the PROMETEO program phase II. The author from Argonne National Laboratory was supported by the US Department of Energy, Office of Science, under Contract No. DE-AC02-06CH11357. The authors are also grateful for the generous support provided by Mellanox Technologies.

    Reaño González, C.; Silla Jiménez, F.; Castelló Gimeno, A.; Peña Monferrer, A. J.; Mayo Gual, R.; Quintana Ortí, E. S.; Duato Marín, J. F. (2015). Improving the User Experience of the rCUDA Remote GPU Virtualization Framework. Concurrency and Computation: Practice and Experience, 27(14), 3746-3770. https://doi.org/10.1002/cpe.3409
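    To make the "unmodified CUDA program, possibly remote GPUs" point concrete, the C sketch below simply enumerates the devices visible through the CUDA runtime API; it is a generic example written for this summary, not rCUDA code, but under rCUDA the devices it reports may live in other cluster nodes:

```c
/* Generic CUDA runtime-API device walk; build with: nvcc devices.c
 * Under rCUDA the same binary sees remote GPUs as ordinary devices. */
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    int ndev = 0;
    if (cudaGetDeviceCount(&ndev) != cudaSuccess || ndev == 0) {
        fprintf(stderr, "no CUDA devices visible\n");
        return 1;
    }
    for (int d = 0; d < ndev; d++) {
        cudaSetDevice(d);                      /* local or rCUDA-provided GPU */
        void *buf = NULL;
        if (cudaMalloc(&buf, 1 << 20) == cudaSuccess) {  /* 1 MiB test alloc */
            printf("device %d: allocation OK\n", d);
            cudaFree(buf);
        }
    }
    return 0;
}
```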

    SLURM Support for Remote GPU Virtualization: Implementation and Performance Study

    Full text link
    © 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

    SLURM is a resource manager that can be leveraged to share a collection of heterogeneous resources among the jobs in execution in a cluster. However, SLURM is not designed to handle resources such as graphics processing units (GPUs). Concretely, although SLURM can use a generic resource plugin (GRes) to manage GPUs, with this solution the hardware accelerators can only be accessed by the job that is in execution on the node to which the GPU is attached. This is a serious constraint for remote GPU virtualization technologies, which aim at providing user-transparent access to all GPUs in the cluster, independently of the location of the node where the application is running with respect to the GPU node. In this work we introduce a new type of device in SLURM, "rgpu", in order to gain access from any application node to any GPU node in the cluster using rCUDA as the remote GPU virtualization solution. With this new scheduling mechanism, a user can access any number of GPUs, as SLURM schedules the tasks taking into account all the graphics accelerators available in the complete cluster. We present experimental results that show the benefits of this new approach in terms of increased flexibility for the job scheduler. The researchers at UPV were supported by the Generalitat Valenciana under Grant PROMETEOII/2013/009 of the PROMETEO program phase II. Researchers at UJI were supported by MINECO, by FEDER funds under Grant TIN2011-23283, and by the Fundación Caixa-Castelló Bancaixa (Grant P11B2013-21).

    Iserte Agut, S.; Castelló Gimeno, A.; Mayo Gual, R.; Quintana Ortí, E. S.; Silla Jiménez, F.; Duato Marín, J. F.; Reaño González, C.... (2014). SLURM Support for Remote GPU Virtualization: Implementation and Performance Study. In Computer Architecture and High Performance Computing (SBAC-PAD), 2014 IEEE 26th International Symposium on. IEEE, 318-325. https://doi.org/10.1109/SBAC-PAD.2014.49

    Gestión de entregables con grupos grandes

    Get PDF
    In the European Higher Education Area (EHEA), instructors must take responsibility for planning the work that students carry out, both inside and outside the classroom, and also for assessing that work, preferably on a continuous basis. To this end, the instructor may prepare a set of documents that specify the activities the student must complete throughout the course and hand in for correction (hereafter, we call these documents deliverables). The problem with using deliverables is that managing them is a rather costly process, especially when the number of students per group is large. This article presents an application for managing deliverables that markedly reduces the time the instructor must devote to their management, making it feasible to use deliverables even with large groups. We also propose an assessment method in which the percentage of deliverables completed by the student forms part of their grade. Both the management application and the proposed assessment method have been tested in several courses with satisfactory results. This teaching project was funded by the Unitat de Suport Educatiu of Universitat Jaume I (project code: 05G073-400).

    Aplicación para la gestión y calificación de actividades ECTS

    Get PDF
    This paper describes a web application that facilitates the management and assessment of the ECTS activities performed by students. The main advantage of this application is that it allows continuous monitoring of the work performed by students, even with large groups. The application is currently being used in 9 courses of the degree in Computer Engineering at Universitat Jaume I, managing more than 200 different activities per course. This work was partially funded by the Unitat de Suport Educatiu of Universitat Jaume I, within the framework of Educational Innovation Project 10G136-329.

    Rendimiento académico de los estudios de informática en algunos centros españoles

    Get PDF
    We present the results of a study on the academic performance of Computer Science degrees (technical engineering and higher engineering programmes) at eight Spanish public universities. New-entry students and the total enrolment of each degree are analysed separately, disaggregated by sex, age, origin, and admission grade. Mean and aggregate values are computed for each degree, taking into account that each participating university contributed the data it had available. The computed indicators cover success rate, performance rate, drop-out rate, first-choice demand, and mean duration of studies. Peer Reviewed